
    Globally Optimal Coupled Surfaces for Semi-automatic Segmentation of Medical Images

    Manual delineations are of paramount importance in medical imaging, for instance to train supervised methods and evaluate automatic segmentation algorithms. In volumetric images, manually tracing regions of interest is an excruciating process in which much time is wasted labeling neighboring 2D slices that are similar to each other. Here we present a method to compute a set of discrete minimal surfaces whose boundaries are specified by user-provided segmentations on one or more planes. Using this method, the user can, for example, manually delineate one slice in every n and let the algorithm complete the segmentation for the slices in between. Using a discrete framework, this method globally minimizes a cost function that combines a regularizer with a data term based on image intensities, while ensuring that the surfaces do not intersect each other or leave holes in between. While the resulting optimization problem is an integer program and thus NP-hard, we show that the equality constraint matrix is totally unimodular, which enables us to solve the linear program (LP) relaxation instead. We can then capitalize on the existence of efficient LP solvers to compute a globally optimal solution in practical times. Experiments on two different datasets illustrate the superiority of the proposed method over the use of independent, label-wise optimal surfaces (∼5% mean increase in Dice when one in every six slices is labeled, with some structures improving by up to ∼10% in Dice).
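    The total-unimodularity argument can be illustrated with a toy problem (a shortest-path program, not the paper's coupled-surface model): when the equality constraint matrix is a node-arc incidence matrix, which is totally unimodular, the LP relaxation of the 0/1 integer program already has an integral optimum, so an off-the-shelf LP solver suffices.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical 4-node directed graph; the node-arc incidence matrix below is
# totally unimodular, so the LP relaxation of this 0/1 shortest-path program
# returns a 0/1 solution without any integer programming.
arcs = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
cost = np.array([1.0, 4.0, 1.0, 5.0, 1.0])  # arc costs

n_nodes = 4
A_eq = np.zeros((n_nodes, len(arcs)))
for j, (u, v) in enumerate(arcs):
    A_eq[u, j] = 1.0   # arc leaves u
    A_eq[v, j] = -1.0  # arc enters v
b_eq = np.array([1.0, 0.0, 0.0, -1.0])  # route one unit from node 0 to node 3

res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1), method="highs")
print(res.x, res.fun)  # relaxed solution is already 0/1: path 0 -> 1 -> 2 -> 3
```

The same mechanism lets the paper solve its surface-fitting integer program via an efficient LP solver with a global optimality guarantee.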

    Synth-by-Reg (SbR): Contrastive Learning for Synthesis-Based Registration of Paired Images

    Nonlinear inter-modality registration is often challenging due to the lack of objective functions that are good proxies for alignment. Here we propose a synthesis-by-registration method to convert this problem into an easier intra-modality task. We introduce a registration loss for weakly supervised image translation between domains that does not require perfectly aligned training data. This loss capitalises on a registration U-Net with frozen weights to drive a synthesis CNN towards the desired translation. We complement this loss with a structure-preserving constraint based on contrastive learning, which prevents blurring and content shifts due to overfitting. We apply this method to the registration of histological sections to MRI slices, a key step in 3D histology reconstruction. Results on two public datasets show improvements over registration based on mutual information (13% reduction in landmark error) and synthesis-based algorithms such as CycleGAN (11% reduction), and are comparable to registration with label supervision. Code and data are publicly available at https://github.com/acasamitjana/SynthByReg.
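    A minimal NumPy sketch of the patchwise contrastive (InfoNCE-style) term used as a structure-preserving constraint; the exact features and hyperparameters are assumptions, not the paper's code. Features of the synthesised image at a location should match the source features at the same location (positive pair, on the diagonal) and differ from other locations (negatives).

```python
import numpy as np

def patch_nce_loss(feat_src, feat_syn, temperature=0.07):
    """feat_src, feat_syn: (n_patches, dim) L2-normalised feature arrays."""
    logits = feat_syn @ feat_src.T / temperature  # (n, n) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))            # positives on the diagonal

rng = np.random.default_rng(0)
f = rng.standard_normal((8, 16))
f /= np.linalg.norm(f, axis=1, keepdims=True)
noisy = f + 0.05 * rng.standard_normal(f.shape)   # structure preserved
noisy /= np.linalg.norm(noisy, axis=1, keepdims=True)
shuffled = f[::-1].copy()                         # content shifted

# Spatially matched features score a much lower loss than shuffled ones.
print(patch_nce_loss(f, noisy), patch_nce_loss(f, shuffled))
```

In the paper's setting this penalty discourages the synthesis CNN from shifting content, since any spatial rearrangement breaks the diagonal positive pairs.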

    Unsupervised learning for cross-domain medical image synthesis using deformation invariant cycle consistency networks

    Recently, the cycle-consistent generative adversarial network (CycleGAN) has been widely used for synthesis of multi-domain medical images. The domain-specific nonlinear deformations captured by CycleGAN make the synthesized images difficult to use for some applications, for example generating pseudo-CT for PET-MR attenuation correction. This paper presents a deformation-invariant CycleGAN (DicycleGAN) method using deformable convolutional layers and new cycle-consistency losses. Its robustness to data that suffer from domain-specific nonlinear deformations has been evaluated through comparison experiments on a multi-sequence brain MR dataset and a multi-modality abdominal dataset. Our method demonstrates the ability to generate synthesized data that are aligned with the source while maintaining signal quality comparable to CycleGAN-generated data. The proposed model also obtains performance comparable to CycleGAN when data from the source and target domains can be aligned through simple affine transformations.
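    For reference, the standard cycle-consistency term that DicycleGAN modifies can be sketched with stand-in invertible functions in place of generator networks (a conceptual illustration only, not the deformation-invariant variant): translating to the other domain and back should reproduce the input.

```python
import numpy as np

def cycle_consistency_loss(x, G_ab, G_ba):
    # L1 penalty between the input and its round-trip translation.
    return np.mean(np.abs(G_ba(G_ab(x)) - x))

x = np.linspace(0.0, 1.0, 5)
G_ab = lambda v: 2.0 * v + 1.0          # toy domain-A -> domain-B mapping
G_ba_good = lambda v: (v - 1.0) / 2.0   # exact inverse: zero cycle loss
G_ba_bad = lambda v: v / 2.0            # inconsistent mapping: nonzero loss

print(cycle_consistency_loss(x, G_ab, G_ba_good),
      cycle_consistency_loss(x, G_ab, G_ba_bad))
```

The paper's contribution is to make this round-trip comparison tolerant of domain-specific nonlinear deformations, rather than demanding exact voxelwise identity as above.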

    Prior-based Coregistration and Cosegmentation

    We propose a modular and scalable framework for dense coregistration and cosegmentation with two key characteristics: first, we substitute ground truth data with the semantic map output of a classifier; second, we combine this output with population deformable registration to improve both alignment and segmentation. Our approach deforms all volumes towards consensus, taking into account image similarities and label consistency. Our pipeline can incorporate any classifier and similarity metric. Results on two datasets, containing annotations of challenging brain structures, demonstrate the potential of our method. Comment: The first two authors contributed equally.

    Uncertainty-Aware Annotation Protocol to Evaluate Deformable Registration Algorithms

    Landmark correspondences are a widely used type of gold standard in image registration. However, the manual placement of corresponding points is subject to high inter-user variability in the chosen annotated locations and in the interpretation of visual ambiguities. In this paper, we introduce a principled strategy for the construction of a gold standard in deformable registration. Our framework: (i) iteratively suggests the most informative location to annotate next, taking into account its redundancy with previous annotations; (ii) extends traditional pointwise annotations by accounting for the spatial uncertainty of each annotation, which can either be directly specified by the user, or aggregated from pointwise annotations from multiple experts; and (iii) naturally provides a new strategy for the evaluation of deformable registration algorithms. Our approach is validated on four different registration tasks. The experimental results show the efficacy of suggesting annotations according to their informativeness, and an improved capacity to assess the quality of the outputs of registration algorithms. In addition, our approach yields, from sparse annotations only, a dense visualization of the errors made by a registration method. The source code of our approach, supporting both 2D and 3D data, is publicly available at https://github.com/LoicPeter/evaluation-deformable-registration.
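    The "informative but non-redundant" suggestion step can be sketched with an assumed scoring rule (the paper's actual criterion is more principled): weight each candidate's local uncertainty by how far it lies from locations that are already annotated.

```python
import numpy as np

def suggest_next(candidates, annotated, uncertainty):
    """candidates: (n, 2) points; annotated: (m, 2) points; uncertainty: (n,)."""
    # Distance from every candidate to its nearest existing annotation.
    d = np.linalg.norm(candidates[:, None, :] - annotated[None, :, :], axis=2)
    redundancy = np.exp(-d.min(axis=1))       # near an annotation -> redundant
    score = uncertainty * (1.0 - redundancy)  # informative and non-redundant
    return int(np.argmax(score))

cands = np.array([[0.0, 0.0], [5.0, 0.0], [0.1, 0.1]])
done = np.array([[0.0, 0.0]])                 # one landmark already placed
unc = np.array([1.0, 0.8, 1.0])
print(suggest_next(cands, done, unc))  # far-away point wins despite lower uncertainty
```

Greedy rules of this kind naturally spread annotations over the image instead of clustering them where a single expert happened to start.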

    Thalamic nuclei in frontotemporal dementia: Mediodorsal nucleus involvement is universal but pulvinar atrophy is unique to C9orf72

    Thalamic atrophy is a common feature across all forms of FTD but little is known about specific nuclei involvement. We aimed to investigate in vivo atrophy of the thalamic nuclei across the FTD spectrum. A cohort of 402 FTD patients (age: mean(SD) 64.3(8.2) years; disease duration: 4.8(2.8) years) was compared with 104 age‐matched controls (age: 62.5(10.4) years), using an automated segmentation of T1‐weighted MRIs to extract volumes of 14 thalamic nuclei. Stratification was performed by clinical diagnosis (180 behavioural variant FTD (bvFTD), 85 semantic variant primary progressive aphasia (svPPA), 114 nonfluent variant PPA (nfvPPA), 15 PPA not otherwise specified (PPA‐NOS), and 8 with associated motor neurone disease (FTD‐MND)), genetic diagnosis (27 MAPT, 28 C9orf72, 18 GRN), and pathological confirmation (37 tauopathy, 38 TDP‐43opathy, 4 FUSopathy). The mediodorsal nucleus (MD) was the only nucleus affected in all FTD subgroups (16–33% smaller than controls). The laterodorsal nucleus was also particularly affected in genetic cases (28–38%), TDP‐43 type A (47%), tau‐CBD (44%), and FTD‐MND (53%). The pulvinar was affected only in the C9orf72 group (16%). The lateral and medial geniculate nuclei were also affected in the genetic cases (10–20%), particularly the LGN in C9orf72 expansion carriers. Use of individual thalamic nuclei volumes provided higher accuracy in discriminating between FTD groups than the whole thalamic volume. The MD is the only structure affected across all FTD groups. Differential involvement of the thalamic nuclei among FTD forms is seen, with a unique pattern of atrophy in the pulvinar in C9orf72 expansion carriers.

    Hierarchical Joint Registration of Tissue Blocks With Soft Shape Constraints For Large-Scale Histology of The Human Brain

    Large-scale 3D histology reconstruction of the human brain with MRI as volumetric reference generally requires reassembling the tissue blocks into the MRI space, prior to any further reconstruction of the histology of the individual blocks. This is a challenging registration problem, particularly in the frequent case that blockface photographs of paraffin-embedded tissue are used as intermediate modality, as their contrast between white and gray matter is rather low. Here we propose a registration framework to address this problem, relying on two key components. First, blocks are simultaneously aligned to the MRI while exploiting the spatial constraints that they impose on each other, by means of a customized soft shape constraint (similar to a jigsaw puzzle). And second, we adopt a hierarchical optimization strategy that capitalizes on our prior knowledge of the slicing and blocking procedure. Our framework is validated quantitatively on synthetic data, and qualitatively on the histology of a whole human hemisphere.
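    A toy 1-D sketch of what a "jigsaw" soft shape constraint can look like (a hypothetical quadratic form, not the paper's exact term): neighbouring blocks are encouraged to abut, with gaps and overlaps between adjacent block edges penalised softly rather than forbidden outright.

```python
import numpy as np

def shape_penalty(offsets, widths):
    """Soft jigsaw penalty for blocks laid out along one axis."""
    right_edges = offsets + widths
    gaps = offsets[1:] - right_edges[:-1]  # gap (> 0) or overlap (< 0)
    return np.sum(gaps ** 2)

widths = np.array([10.0, 8.0, 12.0])
snug = np.array([0.0, 10.0, 18.0])   # blocks exactly abut: zero penalty
loose = np.array([0.0, 12.0, 18.0])  # a gap and an overlap: penalised

print(shape_penalty(snug, widths), shape_penalty(loose, widths))
```

Added to the image-similarity data term, such a penalty lets each block's pose trade off its own fit to the MRI against consistency with its neighbours.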

    Towards segmentation and spatial alignment of the human embryonic brain using deep learning for atlas-based registration

    We propose an unsupervised deep learning method for atlas-based registration to achieve segmentation and spatial alignment of the embryonic brain in a single framework. Our approach consists of two sequential networks with a specifically designed loss function to address the challenges in 3D first-trimester ultrasound. The first part learns the affine transformation and the second part learns the voxelwise nonrigid deformation between the target image and the atlas. We trained this network end-to-end and validated it against a ground truth on synthetic datasets designed to resemble the challenges present in 3D first-trimester ultrasound. The method was tested on a dataset of human embryonic ultrasound volumes acquired at 9 weeks gestational age, which showed alignment of the brain in some cases and gave insight into open challenges for the proposed method. We conclude that our method is a promising approach towards fully automated spatial alignment and segmentation of embryonic brains in 3D ultrasound.
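    The two-stage transform the networks predict composes in a fixed order: global affine alignment first, then a voxelwise displacement. A minimal point-based sketch (interfaces assumed; the actual networks warp image grids, not point lists):

```python
import numpy as np

def apply_affine(points, A, t):
    # Global linear alignment: rotation/scale/shear A plus translation t.
    return points @ A.T + t

def apply_displacement(points, disp):
    # Local nonrigid refinement: one displacement vector per point.
    return points + disp

pts = np.array([[1.0, 0.0], [0.0, 1.0]])
A = np.array([[0.0, -1.0], [1.0, 0.0]])  # 90-degree rotation (stage 1)
t = np.array([1.0, 0.0])
disp = np.array([[0.1, 0.0], [0.0, -0.1]])  # small residual warp (stage 2)

warped = apply_displacement(apply_affine(pts, A, t), disp)
print(warped)
```

Training the two stages end-to-end, as the paper does, lets the nonrigid stage correct only the residual misalignment the affine stage cannot express.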

    Deep active learning for suggestive segmentation of biomedical image stacks via optimisation of Dice scores and traced boundary length

    Manual segmentation of stacks of 2D biomedical images (e.g., histology) is a time-consuming task which can be sped up with semi-automated techniques. In this article, we present a suggestive deep active learning framework that seeks to minimise the annotation effort required to achieve a certain level of accuracy when labelling such a stack. The framework suggests, at every iteration, a specific region of interest (ROI) in one of the images for manual delineation. Using a deep segmentation neural network and a mixed cross-entropy loss function, we propose a principled strategy to estimate class probabilities for the whole stack, conditioned on heterogeneous partial segmentations of the 2D images, as well as on weak supervision in the form of image indices that bound each ROI. Using the estimated probabilities, we propose a novel active learning criterion based on predictions of the estimated segmentation performance and delineation effort, measured with average Dice scores and total delineated boundary length, respectively, rather than common surrogates such as entropy. The query strategy suggests the ROI that is expected to maximise the ratio between performance and effort, while considering the adjacency of structures that may have already been labelled – which decreases the length of the boundary to trace. We provide quantitative results on synthetically deformed MRI scans and real histological data, showing that our framework can reduce labelling effort by up to 60–70% without compromising accuracy.
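    The ratio-based query rule can be sketched as follows (a hypothetical simplification; the paper predicts these quantities with its probabilistic model rather than taking them as given): pick the ROI with the highest expected Dice gain per unit of boundary the annotator must actually trace, where boundary shared with already-labelled structures costs nothing.

```python
import numpy as np

def best_roi(dice_gain, boundary_len, shared_len):
    """All arguments are per-ROI arrays; returns the index of the best query."""
    effort = np.maximum(boundary_len - shared_len, 1e-9)  # tracing effort left
    return int(np.argmax(dice_gain / effort))

gain = np.array([0.05, 0.04, 0.02])       # predicted Dice improvement per ROI
boundary = np.array([100.0, 50.0, 20.0])  # total boundary length per ROI
shared = np.array([0.0, 30.0, 0.0])       # boundary adjacent to labelled structures

print(best_roi(gain, boundary, shared))  # ROI 1: modest gain, but cheap to trace
```

Dividing by remaining effort rather than ranking by gain alone is what steers the suggestions towards cheap, high-yield annotations.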

    Consistency-based Semi-supervised Active Learning: Towards Minimizing Labeling Cost

    Active learning (AL) combines data labeling and model training to minimize the labeling cost by prioritizing the selection of high-value data that can best improve model performance. In pool-based active learning, accessible unlabeled data are not used for model training in most conventional methods. Here, we propose to unify unlabeled sample selection and model training towards minimizing labeling cost, and make two contributions towards that end. First, we exploit both labeled and unlabeled data using semi-supervised learning (SSL) to distill information from unlabeled data during the training stage. Second, we propose a consistency-based sample selection metric that is coherent with the training objective such that the selected samples are effective at improving model performance. We conduct extensive experiments on image classification tasks. The experimental results on CIFAR-10, CIFAR-100 and ImageNet demonstrate the superior performance of our proposed method with limited labeled data, compared to the existing methods and the alternative AL and SSL combinations. Additionally, we study an important yet under-explored problem -- "When can we start learning-based AL selection?". We propose a measure that is empirically correlated with the AL target loss and is potentially useful for determining the proper starting point of learning-based AL methods. Comment: Accepted by ECCV202
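    A consistency-based selection metric of the kind described can be sketched like this (an assumed formulation, not the paper's exact metric): score each unlabeled sample by how much the model's predictions vary across augmented views, and query the most inconsistent samples first.

```python
import numpy as np

def select_for_labeling(probs_per_view, k):
    """probs_per_view: (n_samples, n_views, n_classes) softmax outputs.

    Returns indices of the k samples whose predictions disagree most
    across augmented views.
    """
    inconsistency = probs_per_view.var(axis=1).sum(axis=1)  # per-sample score
    return np.argsort(inconsistency)[::-1][:k]

probs = np.array([
    [[0.9, 0.1], [0.88, 0.12], [0.91, 0.09]],  # stable prediction
    [[0.6, 0.4], [0.2, 0.8], [0.9, 0.1]],      # views disagree strongly
])
print(select_for_labeling(probs, 1))  # the disagreeing sample is queried
```

Because the same augmentation consistency is what the SSL training objective enforces, this selection rule targets exactly the samples the current model handles worst.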